A Computable Theory for Learning Bayesian Networks based on MAP-MDL Principles
Authors
Abstract
Bayesian networks provide a powerful intelligent information fusion architecture for modeling probabilistic and causal relationships among multiple random variables. This paper advances a computable theory of learning discrete Bayesian networks from data. The theory is based on the MAP-MDL principles: maximizing the joint probability or, equivalently, minimizing the joint description length of the data and the Bayesian network model, comprising the network structure and the probability distribution parameters. Computable formalisms are derived for the data likelihood given a Bayesian network structure, the description length of a structure, and the estimation of the parameters given a structure. EM algorithms are constructed to handle incomplete and soft data.
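To make the minimum-description-length objective concrete, the following is a minimal sketch of scoring a discrete Bayesian network structure by its MDL description length (negative log-likelihood plus a parameter-coding penalty). The data layout, the `mdl_score` helper, and its arguments are illustrative assumptions, not an implementation from the paper.

```python
import math
from collections import Counter

def mdl_score(data, parents, arity):
    """MDL description length (lower is better) of a structure on data.
    Hypothetical helper for illustration, not the paper's implementation.

    data    -- list of dicts mapping variable name -> state index
    parents -- dict mapping each variable to its list of parent variables
    arity   -- dict mapping each variable to its number of states
    """
    N = len(data)
    neg_log_lik = 0.0
    n_params = 0
    for v, ps in parents.items():
        # N_ijk: count of each (parent configuration, child value) pair
        joint = Counter((tuple(row[p] for p in ps), row[v]) for row in data)
        # N_ij: count of each parent configuration
        marg = Counter(tuple(row[p] for p in ps) for row in data)
        for (cfg, _val), n_ijk in joint.items():
            # maximum-likelihood log term: N_ijk * log(N_ijk / N_ij)
            neg_log_lik -= n_ijk * math.log(n_ijk / marg[cfg])
        # (r_i - 1) free parameters per parent configuration
        q = 1
        for p in ps:
            q *= arity[p]
        n_params += (arity[v] - 1) * q
    # description length of the data plus (log N / 2) bits per parameter
    return neg_log_lik + 0.5 * n_params * math.log(N)
```

On data where one variable deterministically copies another, the structure with the corresponding edge receives a lower (better) score than the empty structure, since the gain in likelihood outweighs the extra parameter-coding cost.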
Related works
Computational Machine Learning in Theory and Praxis Produced as Part of the Esprit Working Group in Neural and Computational Learning, Neurocolt 8556
In the last few decades a computational approach to machine learning has emerged based on paradigms from recursion theory and the theory of computation. Such ideas include learning in the limit, learning by enumeration, and probably approximately correct (pac) learning. These models usually are not suitable in practical situations. In contrast, statistics based inference methods have enjoyed a...
Learning Bayesian Belief Networks Based on the Minimum Description Length Principle: Basic Properties
SUMMARY This paper addresses the problem of learning Bayesian belief networks (BBN) based on the minimum description length (MDL) principle. First, we give a formula of description length based on which the MDL-based procedure learns a BBN. Secondly, we point out that the difference between the MDL-based and Cooper and Herskovits procedures is essentially in the priors rather than in the approac...
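For background, the description length of a discrete network structure $G$ on $N$ samples is commonly written in the following standard form (stated here as general MDL background, not quoted from the paper above):

$$\mathrm{DL}(G, D) \;=\; -\sum_{i,j,k} N_{ijk} \log \frac{N_{ijk}}{N_{ij}} \;+\; \frac{\log N}{2} \sum_i (r_i - 1)\, q_i$$

where $N_{ijk}$ counts the samples in which variable $X_i$ takes its $k$-th state while its parents take their $j$-th configuration, $N_{ij} = \sum_k N_{ijk}$, $r_i$ is the number of states of $X_i$, and $q_i$ is the number of parent configurations of $X_i$. The first term is the code length of the data under maximum-likelihood parameters; the second is the code length of the parameters themselves.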
Safe Learning: bridging the gap between Bayes, MDL and statistical learning theory via empirical convexity
We extend Bayesian MAP and Minimum Description Length (MDL) learning by testing whether the data can be substantially more compressed by a mixture of the MDL/MAP distribution with another element of the model, and adjusting the learning rate if this is the case. While standard Bayes and MDL can fail to converge if the model is wrong, the resulting “safe” estimator continues to achieve good rate...
Publication date: 2005